Saliency methods compute heat maps that highlight the portions of an input that were most {\em important} for the label a deep net assigned to it. Evaluations of saliency methods convert this heat map into a new {\em masked input} -- retaining the $k$ highest-ranked pixels of the original input and replacing the rest with ``uninformative'' pixels -- and check whether the net's output is largely unchanged. This is usually taken as an {\em explanation} of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by the logic concepts of {\em completeness \& soundness}, it observes that this type of evaluation focuses on the completeness of the explanation but ignores its soundness. New evaluation metrics are introduced to capture both notions while staying in an {\em intrinsic} framework -- i.e., using the dataset and the net, but no separately trained nets, human evaluations, etc. A simple saliency method is described that matches or outperforms prior methods in these evaluations. Experiments also suggest new intrinsic justifications, based on soundness, for popular heuristic tricks such as TV regularization and upsampling.
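The top-$k$ masking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the constant `fill_value` baseline is an assumption, since the choice of "uninformative" pixel varies across evaluation protocols.

```python
import numpy as np

def masked_input(x, saliency, k, fill_value=0.0):
    """Keep the k highest-saliency pixels of x; replace the rest
    with an 'uninformative' fill value (here a constant baseline)."""
    flat = saliency.ravel()
    # indices of the k most salient pixels
    top_k = np.argpartition(flat, -k)[-k:]
    mask = np.zeros(flat.shape, dtype=bool)
    mask[top_k] = True
    mask = mask.reshape(saliency.shape)
    return np.where(mask, x, fill_value)
```

The evaluation would then feed `masked_input(x, saliency, k)` back through the net and compare the output to the prediction on the original `x`.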
Research on generalization bounds for deep nets aims to predict test error using only the training dataset and the network parameters. While generalization bounds can yield many insights about architecture design, training algorithms, etc., what they currently cannot do is give good predictions of the actual test error. The recently introduced Predicting Generalization in Deep Learning competition aims to encourage the discovery of methods that better predict test error. The current paper investigates a simple idea: can test error be predicted using ``synthetic data'' produced by a Generative Adversarial Network (GAN) that was trained on the same training dataset? After investigating several GAN models and architectures, we find that this turns out to be the case. In fact, using GANs pre-trained on standard datasets, test error can be predicted without requiring any additional hyperparameter tuning. This result is surprising because GANs have well-known limitations (e.g., mode collapse) and are known not to learn the data distribution accurately; yet the generated samples are good enough to substitute for test data. Several additional experiments are presented to explore why GANs do well at this task. Besides a new approach for predicting generalization, the counter-intuitive phenomena demonstrated in our work may also call for a better understanding of GANs' strengths and limitations.
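The core idea above — scoring a classifier on GAN samples in place of held-out test data — can be sketched as below. The `gan_sampler` interface (returning labeled samples, i.e., a conditional GAN) and the function names are illustrative assumptions, not the paper's API.

```python
import numpy as np

def predicted_test_error(classifier, gan_sampler, n=10000):
    """Estimate test error as the classifier's error rate on labeled
    synthetic samples from a GAN trained on the same training set."""
    xs, ys = gan_sampler(n)      # n synthetic inputs with labels
    preds = classifier(xs)
    return np.mean(preds != ys)  # fraction of disagreements
```

The estimate requires no real test labels and no extra hyperparameter tuning: the GAN is trained once on the training set, and the classifier is simply evaluated on its samples.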
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) that do not require labeled data, in order to learn useful semantic representations. These pretext tasks are created using only the input features, e.g., predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words in text; yet predicting this {\em known} information helps in learning representations effective for downstream prediction tasks. We pose a mechanism that exploits statistical connections between certain {\em reconstruction-based} pretext tasks to guarantee learning a good representation. Formally, we quantify how an approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task by training a linear layer on top of the learned representation. We prove that the linear layer yields a small approximation error even for complex ground-truth function classes, and will drastically reduce labeled sample complexity. Next, we show a simple modification of our method that leads to nonlinear CCA, analogous to the popular SimSiam algorithm, and establish similar guarantees for nonlinear CCA.
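The downstream guarantee above concerns training only a linear layer on top of a frozen learned representation. A minimal sketch of such a linear probe via least squares, where the feature map `psi` is a stand-in for a pretext-learned representation (all names and data here are illustrative, not from the paper):

```python
import numpy as np

def linear_probe(psi, X, y):
    """Fit a linear layer W on top of a frozen representation psi
    by ordinary least squares: min_W ||psi(X) @ W - y||^2."""
    H = psi(X)                               # representations, shape (n, d)
    W, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W

# Toy check: if the downstream target is linear in the representation,
# the probe recovers it from few labeled samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
psi = lambda X: np.tanh(X)   # stand-in "pretext-learned" feature map
w_true = rng.normal(size=5)
y = psi(X) @ w_true
W = linear_probe(psi, X, y)
```

The point of the theory is that, under the stated conditional-independence assumption, the downstream target is (approximately) linear in the learned representation, so this cheap probe suffices.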
Recent empirical works have successfully used unlabeled data to learn feature representations that are broadly useful in downstream classification tasks. Several of these methods are reminiscent of the well-known word2vec embedding algorithm: leveraging availability of pairs of semantically "similar" data points and "negative samples," the learner forces the inner product of representations of similar pairs with each other to be higher on average than with negative samples. The current paper uses the term contrastive learning for such algorithms and presents a theoretical framework for analyzing them by introducing latent classes and hypothesizing that semantically similar points are sampled from the same latent class. This framework allows us to show provable guarantees on the performance of the learned representations on the average classification task that consists of a subset of the same set of latent classes. Our generalization bound also shows that learned representations can reduce (labeled) sample complexity on downstream tasks. We conduct controlled experiments in both the text and image domains to support the theory.
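The objective described above — pushing the inner product with a similar point above the inner products with negative samples — can be sketched as a logistic (softmax-style) surrogate loss for a single anchor. This is a generic sketch of that family of losses, not the paper's exact formulation:

```python
import numpy as np

def contrastive_loss(f_x, f_x_pos, f_x_negs):
    """Logistic contrastive loss for one anchor representation f_x:
    encourage <f_x, f_x_pos> to exceed <f_x, f_x_neg> on average."""
    pos = f_x @ f_x_pos    # similarity to the semantically similar point
    negs = f_x_negs @ f_x  # similarities to the negative samples
    # -log softmax probability of the positive among {positive} U negatives
    logits = np.concatenate(([pos], negs))
    return -pos + np.log(np.sum(np.exp(logits)))
```

The loss shrinks as the positive similarity grows relative to the negatives, which is exactly the ordering the learner is said to enforce.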
The Kepler and TESS missions have generated over 100,000 potential transit signals that must be processed in order to create catalogs of planet candidates. In the last few years, there has been growing interest in using machine learning to analyze these data in search of new exoplanets. Different from existing machine learning works, ExoMiner, the deep-learning classifier proposed in this work, mimics how domain experts examine diagnostic tests to vet a transit signal. ExoMiner is a highly accurate, explainable, and robust classifier that 1) allows us to validate 301 new exoplanets from the MAST Kepler archive, and 2) is general enough to be applied across missions such as the ongoing TESS mission. We perform an extensive experimental study to verify that ExoMiner is more reliable and accurate than existing transit-signal classifiers in terms of different classification and ranking metrics. For example, for a fixed precision value of 99%, ExoMiner retrieves 93.6% of all exoplanets in the test set (i.e., recall = 0.936), while this rate is 76.3% for the best existing classifier. Furthermore, ExoMiner's modular design favors its explainability. We introduce a simple explainability framework that provides experts with feedback on why ExoMiner classifies a transit signal into a specific class label (e.g., planet candidate or not planet candidate).
Cooperative multi-agent reinforcement learning (MARL) has achieved significant results, most notably by leveraging the representation-learning abilities of deep neural networks. However, large centralized approaches quickly become infeasible as the number of agents scales, and fully decentralized approaches can miss important opportunities for information sharing and coordination. Furthermore, not all agents are equal -- in some cases, individual agents may not even have the ability to send communication to other agents or to explicitly model them. This paper considers the case where there is a single, powerful \emph{central agent} that can observe the entire observation space, and multiple low-powered \emph{local agents} that receive only local observations and cannot communicate with each other. The central agent's job is to learn what message to send to each local agent based on the global observations -- not by centrally solving the entire problem and sending action commands, but by determining what additional information an individual agent should receive so that it can make a better decision. In this work we present our MARL algorithm \algo, describe where it would be most applicable, and implement it in the cooperative navigation and multi-agent walker domains. Empirical results show that 1) learned communication does indeed improve system performance, 2) results generalize to heterogeneous local agents, and 3) results generalize to different reward structures.